Tutorial: Edge AI with Triton Inference Server, Kubernetes, Jetson Mate
In this tutorial, we will configure and deploy Nvidia Triton Inference Server on the Jetson Mate carrier board to run inference on computer vision models. It builds on our previous post, which introduced the Jetson Mate from Seeed Studio as a platform for running a Kubernetes cluster at the edge. Though this tutorial focuses on Jetson Mate, you can also use one or more Jetson Nano Developer Kits connected to a network switch to run the Kubernetes cluster. Assuming you have installed and configured JetPack 4.6.x on all four Jetson Nano 4GB modules, let's start with the installation of K3s. The first step is to make the Nvidia Container Toolkit the default runtime for Docker.
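On each Jetson Nano module, this amounts to adding a `default-runtime` entry to Docker's daemon configuration. A minimal sketch of the step, assuming a stock JetPack 4.6.x image where the `nvidia` runtime is already installed (the exact contents of `/etc/docker/daemon.json` on your image may differ, so merge rather than overwrite if you have customized it):

```shell
# Make the Nvidia runtime the default for Docker.
# JetPack ships the nvidia-container-runtime; adding "default-runtime"
# means every container gets GPU access without an explicit --runtime flag.
sudo tee /etc/docker/daemon.json > /dev/null <<'EOF'
{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
EOF

# Restart Docker so the change takes effect, then verify the default runtime.
sudo systemctl restart docker
docker info | grep -i 'default runtime'   # should report the nvidia runtime
```

Repeat this on all four modules before installing K3s, since each node needs GPU-enabled containers for Triton.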
Jetson Mate: A Compact Carrier Board for Jetson Nano/NX System-on-Modules
Containers have become the unit of deployment not just for data center and cloud workloads but also for edge applications. Alongside containers, Kubernetes has become the foundation of this infrastructure, and lightweight distributions such as K3s are fueling its adoption at the edge. Working with large retailers and system integrators rolling out Kubernetes-based edge infrastructure, I have seen many challenges. One of them is the need to mix and match ARM64 and AMD64 devices to run AI workloads.